privacy law


Surveillance and ICE Are Driving Patients Away From Medical Care, Report Warns

WIRED

A new EPIC report says data brokers, ad-tech surveillance, and ICE enforcement are among the factors driving a "health privacy crisis" that is eroding trust and deterring people from seeking care. When immigration agents enter hospitals and private companies are allowed to buy and sell data revealing who seeks medical care, patients retreat, treatment is delayed, and health outcomes worsen, according to the report, which describes a growing crisis in the United States driven by surveillance and weak limits on law enforcement. Published by the Electronic Privacy Information Center (EPIC), the report attributes the problem to outdated privacy laws and rapidly expanding digital systems that allow health-related information to be tracked, analyzed, breached, and accessed by both private companies and government agencies. EPIC, a Washington-based nonprofit focused on privacy and civil liberties, based its findings on a review of federal and state laws, court rulings, agency policies, technical research, and documented case studies examining how health data is collected, shared, and used across government and commercial systems. "Unregulated digital technologies, mass surveillance, and weak privacy laws have created a health privacy crisis," the report says.


Automated Boilerplate: Prevalence and Quality of Contract Generators in the Context of Swiss Privacy Policies

Nenadic, Luka, Rodriguez, David

arXiv.org Artificial Intelligence

It has become increasingly challenging for firms to comply with a plethora of novel digital regulations. This is especially true for smaller businesses, which often lack both the resources and the know-how to draft complex legal documents. Instead of seeking costly legal advice from attorneys, firms may turn to cheaper alternative legal service providers, such as automated contract generators. While these services have a long-standing presence, there is little empirical evidence on their prevalence and output quality. We address this gap in the context of a 2023 Swiss privacy law revision. To enable a systematic evaluation, we create and annotate a multilingual benchmark dataset that captures key compliance obligations under Swiss and EU privacy law. Using this dataset, we validate a novel GPT-5-based method for large-scale compliance assessment of privacy policies, allowing us to measure the impact of the revision. We observe compliance increases, indicating an effect of the revision. Generators, explicitly referenced by 18% of local websites, are associated with substantially higher levels of compliance, with increases of up to 15 percentage points compared to privacy policies without generator use. These findings contribute to three debates: the potential of LLMs for cross-lingual legal analysis, the Brussels Effect of EU regulations, and, crucially, the role of automated tools in improving compliance and contractual quality.
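The abstract's headline figures (18% generator usage, gains of up to 15 percentage points) are aggregate comparisons between policies with and without generator use. As a rough illustration only, with invented per-policy compliance labels rather than the paper's actual data or method, the percentage-point gap could be computed like this:

```python
# Hypothetical sketch: comparing compliance rates of privacy policies
# drafted with vs. without an automated contract generator.
# All labels below are invented for illustration, not from the paper.

def compliance_rate(policies):
    """Fraction of policies marked compliant with a given obligation."""
    return sum(p["compliant"] for p in policies) / len(policies)

policies = [
    {"uses_generator": True,  "compliant": True},
    {"uses_generator": True,  "compliant": True},
    {"uses_generator": True,  "compliant": False},
    {"uses_generator": False, "compliant": True},
    {"uses_generator": False, "compliant": False},
    {"uses_generator": False, "compliant": False},
]

with_gen = [p for p in policies if p["uses_generator"]]
without_gen = [p for p in policies if not p["uses_generator"]]

# Difference in percentage points, the unit the paper reports.
gap_pp = 100 * (compliance_rate(with_gen) - compliance_rate(without_gen))
print(f"generator policies: {compliance_rate(with_gen):.0%}")
print(f"non-generator policies: {compliance_rate(without_gen):.0%}")
print(f"gap: {gap_pp:.0f} percentage points")
```

The paper's actual assessment is done per obligation by a GPT-5-based classifier over a multilingual benchmark; the sketch only shows the final rate comparison.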


Can We Trust AI to Govern AI? Benchmarking LLM Performance on Privacy and AI Governance Exams

Witherspoon, Zane, Aye, Thet Mon, Hao, YingYing

arXiv.org Artificial Intelligence

The rapid emergence of large language models (LLMs) has raised urgent questions across the modern workforce about this new technology's strengths, weaknesses, and capabilities. For privacy professionals, the question is whether these AI systems can provide reliable support on regulatory compliance, privacy program management, and AI governance. In this study, we evaluate ten leading open and closed LLMs, including models from OpenAI, Anthropic, Google DeepMind, Meta, and DeepSeek, by benchmarking their performance on industry-standard certification exams: CIPP/US, CIPM, CIPT, and AIGP from the International Association of Privacy Professionals (IAPP). Each model was tested using official sample exams in a closed-book setting and compared to IAPP's passing thresholds. Our findings show that several frontier models such as Gemini 2.5 Pro and OpenAI's GPT-5 consistently achieve scores exceeding the standards for professional human certification - demonstrating substantial expertise in privacy law, technical controls, and AI governance. The results highlight both the strengths and domain-specific gaps of current LLMs and offer practical insights for privacy officers, compliance leads, and technologists assessing the readiness of AI tools for high-stakes data governance roles. This paper provides an overview for professionals navigating the intersection of AI advancement and regulatory risk and establishes a machine benchmark based on human-centric evaluations.
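The evaluation described above reduces to a pass/fail check: each model's sample-exam score is compared against the IAPP passing threshold for that certification. A minimal sketch of that comparison, using invented scores and an assumed illustrative cutoff (the real thresholds and results are in the paper, not reproduced here):

```python
# Hypothetical sketch of a pass/fail benchmark: model exam scores
# (invented) checked against certification passing thresholds.
# The 0.75 cutoffs are assumptions for illustration, not official
# IAPP values; "model-a"/"model-b" are placeholder names.

PASSING_THRESHOLD = {
    "CIPP/US": 0.75, "CIPM": 0.75, "CIPT": 0.75, "AIGP": 0.75,
}

scores = {  # invented sample-exam accuracies
    "model-a": {"CIPP/US": 0.90, "CIPM": 0.84, "CIPT": 0.88, "AIGP": 0.81},
    "model-b": {"CIPP/US": 0.72, "CIPM": 0.78, "CIPT": 0.70, "AIGP": 0.74},
}

def passes_all(model_scores, thresholds):
    """True if the model meets or beats the cutoff on every exam."""
    return all(model_scores[exam] >= cut for exam, cut in thresholds.items())

for model, s in scores.items():
    verdict = "passes all exams" if passes_all(s, PASSING_THRESHOLD) else "fails at least one"
    print(f"{model}: {verdict}")
```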


A Controversial Facial-Recognition Company Quietly Expands Into Latin America

TIME - Tech

For the past three months, a small encrypted group chat of Latin American officials who investigate online child-exploitation cases has been lighting up with reports of raids, arrests, and rescued minors in half a dozen countries. The successes are the result of a recent trial of a facial-recognition tool given to a group of Latin American law-enforcement officials, investigators, and prosecutors by the American company Clearview AI. During a five-day operation in Ecuador in early March, participants from 10 countries including Argentina, Brazil, Colombia, the Dominican Republic, El Salvador, and Peru were given access to Clearview's technology, which allows them to upload images and run them through a database of billions of public photos scraped from the Internet. "Normally it takes at least several days for a child to be identified, and sometimes there are victims that have not been identified for years," says Guillermo Galarza Abizaid, the vice president in charge of partnerships and law enforcement at the Virginia-based nonprofit International Centre for Missing and Exploited Children (ICMEC), which organized the event. The group used the facial-recognition tool to analyze a total of 2,198 images and 995 videos, hundreds of them from cold cases.


GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory

Fan, Wei, Li, Haoran, Deng, Zheye, Wang, Weiqi, Song, Yangqiu

arXiv.org Artificial Intelligence

Privacy issues arise prominently during the inappropriate transmission of information between entities. Existing research primarily studies privacy by exploring various privacy attacks, defenses, and evaluations within narrowly predefined patterns, while neglecting that privacy is not an isolated, context-free concept limited to traditionally sensitive data (e.g., social security numbers), but is intertwined with intricate social contexts that complicate the identification and analysis of potential privacy violations. The advent of Large Language Models (LLMs) offers unprecedented opportunities for incorporating the nuanced scenarios outlined in privacy laws to tackle these complex privacy issues. However, the scarcity of open-source relevant case studies restricts the efficiency of LLMs in aligning with specific legal statutes. To address this challenge, we introduce a novel framework, GoldCoin, designed to efficiently ground LLMs in privacy laws for the judicial assessment of privacy violations. Our framework leverages the theory of contextual integrity as a bridge, creating numerous synthetic scenarios grounded in relevant privacy statutes (e.g., HIPAA) to assist LLMs in comprehending the complex contexts for identifying privacy risks in the real world. Extensive experimental results demonstrate that GoldCoin markedly enhances LLMs' capabilities in recognizing privacy risks across real court cases, surpassing the baselines on different judicial tasks.


Roku Breach Hits 567,000 Users

WIRED

After months of delays, the US House of Representatives voted on Friday to extend a controversial warrantless wiretap program for two years. Known as Section 702, the program authorizes the US government to collect the communications of foreigners overseas. But this collection also includes reams of communications from US citizens, which are stored for years and can later be warrantlessly accessed by the FBI, which has heavily abused the program. An amendment that would require investigators to obtain such a warrant failed to pass. A group of US lawmakers on Sunday unveiled a proposal that they hope will become the country's first nationwide privacy law.


Is my home spying on me? As smart devices move in, experts fear Australians are oversharing

The Guardian

Take a look around your home and chances are you have one, or at least you have considered the convenience of having one. They are the devices and appliances that can be remotely controlled – otherwise known as smart devices – which over the past decade have become core features of the modern home. Think of the TVs that allow you to flick through various streaming services, the smart fridges that can have their temperatures moderated and contents checked from afar, the robot vacuum, air purifiers, or one of the big tech companies' virtual helpers to play music or dim the lights. But as the technologies gather, share, aggregate and analyse the data collected, that convenience has come at a cost: privacy. Experts say consumers should be aware of how much personal information they are trading, and what that information is used for.


Immersive Tech Obscures Reality. AI Will Threaten It

WIRED

Last week, Amazon announced it was integrating AI into a number of products, including smart glasses, smart home systems, and its voice assistant, Alexa, that help users navigate the world. This week, Meta will unveil its latest AI and extended reality (XR) features, and next week Google will reveal its next line of Pixel phones equipped with Google AI. If you thought AI was already "revolutionary," just wait until it's part of the increasingly immersive, responsive, personal devices that power our lives. AI is already hastening technology's trend toward greater immersion, blurring the boundaries between the physical and digital worlds and allowing users to easily create their own content. When combined with technologies like augmented or virtual reality, it will open up a world of creative possibilities, but also raise new issues related to privacy, manipulation, and safety.


'Lives are ruined in an afternoon': How social media shaped the Huw Edwards story

The Guardian

Social media imploded and the BBC practically ate itself last week as the scandal over Huw Edwards allegedly paying for explicit images from an unnamed young person unspooled. But what you knew, and when, depended largely on where you looked. Consume only traditional media – television, radio and newspapers and news websites like the Guardian – and you would not have had much inkling of who was in the frame until Edwards's wife named the BBC News presenter as the one at the centre of the storm. Sniff around social media, however, and you likely knew who was involved days before – and probably also thought a lot less of other names bandied about in connection with the concern. One former member of Twitter's curation team, who asked not to be named, believes the failure on Twitter's part was down to a combination of short-staffing and tech changes since Elon Musk took over.


Italian privacy regulator bans ChatGPT – POLITICO

#artificialintelligence

The Italian privacy regulator on Friday ordered a ban on ChatGPT over alleged privacy violations. The national data protection authority said it will immediately block OpenAI, the U.S. company behind the popular artificial intelligence tool, from processing the data of Italian users, and will open an investigation. The order is temporary until the company complies with the EU's landmark privacy law, the General Data Protection Regulation (GDPR). Calls to suspend new ChatGPT releases and to investigate its maker OpenAI over a range of risks related to privacy, cybersecurity, and disinformation are growing on both sides of the Atlantic. Elon Musk and dozens of AI experts this week called for a pause to updates of ChatGPT.